Improving Evidential Deep Learning via Multi-Task Learning

Authors

Abstract

The Evidential regression network (ENet) estimates a continuous target and its predictive uncertainty without costly Bayesian model averaging. However, the target may be inaccurately predicted due to the gradient shrinkage problem of the ENet's original loss function, the negative log marginal likelihood (NLL) loss. In this paper, the objective is to improve the prediction accuracy of the ENet while maintaining its efficient uncertainty estimation by resolving the gradient shrinkage problem. A multi-task learning (MTL) framework, referred to as MT-ENet, is proposed to accomplish this aim. In the MTL framework, we define a Lipschitz-modified mean squared error (MSE) loss as another loss and add it to the existing NLL loss. The Lipschitz-modified MSE loss is designed to mitigate gradient conflict with the NLL loss by dynamically adjusting its Lipschitz constant. By doing so, the Lipschitz MSE loss does not disturb the uncertainty estimation of the NLL loss. The MT-ENet enhances the predictive accuracy of the ENet without losing uncertainty estimation capability on a synthetic dataset and real-world benchmarks, including drug-target affinity (DTA) regression. Furthermore, the MT-ENet shows remarkable calibration and out-of-distribution detection capability on the DTA benchmarks.
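The loss combination described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the NIG negative log marginal likelihood follows the standard evidential-regression form, while `lipschitz_mse` uses a Huber-style clipping with a hypothetical threshold `U` as a stand-in for the paper's dynamically adjusted Lipschitz constant.

```python
import numpy as np
from scipy.special import gammaln

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log marginal likelihood of a Normal-Inverse-Gamma
    evidential output (the marginal over y is a Student-t)."""
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * np.log(np.pi / nu)
            - alpha * np.log(omega)
            + (alpha + 0.5) * np.log(nu * (y - gamma) ** 2 + omega)
            + gammaln(alpha) - gammaln(alpha + 0.5))

def lipschitz_mse(y, gamma, U):
    """Huber-style stand-in for the Lipschitz-modified MSE: quadratic
    near the target, linear beyond the (assumed) threshold U, so the
    gradient magnitude is bounded by 2*sqrt(U) and cannot overwhelm
    the NLL gradient."""
    err2 = (y - gamma) ** 2
    return np.where(err2 <= U, err2, 2.0 * np.sqrt(U * err2) - U)

def mt_enet_loss(y, gamma, nu, alpha, beta, U=1.0):
    """Multi-task objective: NLL for uncertainty + bounded MSE for accuracy."""
    return nig_nll(y, gamma, nu, alpha, beta) + lipschitz_mse(y, gamma, U)
```

In the paper the clipping threshold is adapted during training to avoid gradient conflict with the NLL term; the fixed `U` here only conveys the bounded-gradient idea.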


Similar Articles

Deep Automated Multi-task Learning

Multi-task learning (MTL) has recently contributed to learning better representations in service of various NLP tasks. MTL aims at improving the performance of a primary task, by jointly training on a secondary task. This paper introduces automated tasks, which exploit the sequential nature of the input data, as secondary tasks in an MTL model. We explore next word prediction, next character pr...


Deep Asymmetric Multi-task Feature Learning

We propose Deep Asymmetric Multitask Feature Learning (Deep-AMTFL) which can learn deep representations shared across multiple tasks while effectively preventing negative transfer that may happen in the feature sharing process. Specifically, we introduce an asymmetric autoencoder term that allows predictors for the confident tasks to have high contribution to the feature learning while suppress...


Deep Multi-Task Learning with Shared Memory

Neural network based models have achieved impressive results on various specific tasks. However, in previous works, most models are learned separately based on single-task supervised objectives, which often suffer from insufficient training data. In this paper, we propose two deep architectures which can be trained jointly on multiple related tasks. More specifically, we augment neural model wi...


Trace Norm Regularised Deep Multi-Task Learning

We propose a framework for training multiple neural networks simultaneously. The parameters from all models are regularised by the tensor trace norm, so that each neural network is encouraged to reuse others’ parameters if possible – this is the main motivation behind multi-task learning. In contrast to many deep multitask learning models, we do not predefine a parameter sharing strategy by spe...


Multi-task learning with deep model based reinforcement learning

In recent years, model-free methods that use deep learning have achieved great success in many different reinforcement learning environments. Most successful approaches focus on solving a single task, while multi-task reinforcement learning remains an open problem. In this paper, we present a model based approach to deep reinforcement learning which we use to solve different tasks simultaneousl...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2022

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v36i7.20759